Results 1 - 12 of 12
1.
8th International Conference on Modelling and Development of Intelligent Systems, MDIS 2022 ; 1761 CCIS:173-187, 2023.
Article in English | Scopus | ID: covidwho-2281513

ABSTRACT

Creative industries were long thought to be the most difficult avenue for Computer Science to enter and perform well in. Fashion is an integral part of day-to-day life, necessary both for displaying style and conveying feelings and artistic emotions, and for the purely functional purpose of keeping our bodies warm and protected from external factors. The COVID-19 pandemic has accelerated several trends that had been forming in the clothing and textile industry. With the large-scale adoption of Artificial Intelligence (AI) and Deep Learning technologies, the fashion industry is at a turning point. AI now supervises the supply chain, manufacturing, delivery, marketing and targeted advertising for clothes and wearables, and could soon replace designers too. Clothing design for purely digital environments such as the Metaverse, games and other online-specific activities is a niche with huge potential for market growth. This article explains how Big Data and Machine Learning are used to solve important issues in the fashion industry in the post-COVID context and explores the future of clothing and apparel design via artificial generative design. We aim to explore the new opportunities that generative models offer for the development of the fashion industry and textile patterns. The article focuses especially on Generative Adversarial Networks (GAN) but also briefly analyzes other generative models, their advantages and shortcomings. To this end, we undertook several experiments that highlighted some disadvantages of GANs. Finally, we suggest future research niches and possible hindrances that an end user might face when trying to generate their own fashion models using generative deep learning technologies. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
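The abstract above only names GANs as its main technique; as a hedged illustration of the standard GAN objective it refers to, here is a minimal sketch in Python. This is generic textbook material, not the authors' code, and all function names are illustrative:

```python
import math

def discriminator_loss(d_real, d_fake):
    # Binary cross-entropy form of the GAN discriminator objective:
    # the discriminator maximizes log D(x) + log(1 - D(G(z))),
    # i.e. minimizes the negation below.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    # "Non-saturating" generator objective: the generator minimizes
    # -log D(G(z)), pushing the discriminator's output on fakes toward 1.
    return -math.log(d_fake)
```

At the classic equilibrium where the discriminator outputs 0.5 everywhere, the discriminator loss equals 2·log 2, which is the usual sanity check for such a loop.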

2.
IEEE Journal on Selected Areas in Communications ; 41(1):107-118, 2023.
Article in English | Scopus | ID: covidwho-2245641

ABSTRACT

Video represents the majority of internet traffic today, driving a continual race between the generation of higher quality content, transmission of larger file sizes, and the development of network infrastructure. In addition, the recent COVID-19 pandemic fueled a surge in the use of video conferencing tools. Since videos take up considerable bandwidth (~100 kbps to a few Mbps), improved video compression can have a substantial impact on network performance for live and pre-recorded content, providing broader access to multimedia content worldwide. We present a novel video compression pipeline, called Txt2Vid, which dramatically reduces data transmission rates by compressing webcam videos ('talking-head videos') to a text transcript. The text is transmitted and decoded into a realistic reconstruction of the original video using recent advances in deep-learning-based voice cloning and lip syncing models. Our generative pipeline achieves two to three orders of magnitude reduction in the bitrate as compared to the standard audio-video codecs (encoders-decoders), while maintaining equivalent Quality-of-Experience based on a subjective evaluation by users (n = 242) in an online study. The Txt2Vid framework opens up the potential for creating novel applications such as enabling audio-video communication during poor internet connectivity, or in remote terrains with limited bandwidth. The code for this work is available at https://github.com/tpulkit/txt2vid.git. © 1983-2012 IEEE.
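As a rough back-of-the-envelope illustration of the bitrate gap the abstract describes (the speaking rate and bytes-per-word figures below are our illustrative assumptions, not values from the paper):

```python
def bitrate_reduction(video_kbps, words_per_min=150, bytes_per_word=6):
    # Approximate bitrate of a plain-text transcript of speech.
    # ~150 words/min and ~6 bytes/word are illustrative assumptions.
    text_bps = words_per_min / 60.0 * bytes_per_word * 8  # bits per second
    video_bps = video_kbps * 1000.0
    return video_bps / text_bps
```

With these assumptions a transcript runs at about 120 bps, so even a modest 100 kbps video stream is several hundred times larger, consistent with the two-to-three-orders-of-magnitude claim.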

3.
Adv Sci (Weinh) ; 10(8): e2206674, 2023 03.
Article in English | MEDLINE | ID: covidwho-2172344

ABSTRACT

Deep generative models are attracting attention as a smart molecular design strategy. However, previous models often render molecules with low synthesizability, hindering their real-world applications. Here, a novel graph-based conditional generative model is proposed that builds molecules by tailoring retrosynthetically prepared chemical building blocks, in an auto-regressive fashion, until target properties are achieved. This strategy improves the synthesizability and property control of the resulting molecules, and also helps the model learn how to select appropriate building blocks and bind them together to achieve target properties. By applying a negative sampling method to the building-block selection process, this model overcomes a critical limitation of previous fragment-based models, which can only use molecules from the training set during generation. As a result, the model works equally well with unseen building blocks without sacrificing computational efficiency. It is demonstrated that the model can generate potential inhibitors with high docking scores against the 3CL protease of SARS-CoV-2.


Subject(s)
COVID-19 , Humans , SARS-CoV-2 , Endopeptidases , Models, Molecular
4.
30th International Conference on Software, Telecommunications and Computer Networks, SoftCOM 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2146139

ABSTRACT

Deep learning has been adopted in de novo drug design for its ability to generate novel molecules: by training on a small set of molecules with known biological activity towards the target, the model is fine-tuned to generate similar molecules. We propose a method, similar to the create-evaluate-select process found in evolutionary algorithms, for fine-tuning the generative model without the need for molecules with known biological activity, and apply it to SARS-CoV-2. The proposed method decreases the time required to search for SARS-CoV-2 main protease inhibitors by developing a predictive model that estimates the affinity score of molecules, reducing the time needed for docking to a fraction of the original. We achieved 97.6% accuracy in predicting the affinity score of molecules, thus speeding up both the search for existing molecules and the fine-tuning of the generative model to design protease inhibitors for SARS-CoV-2. © 2022 University of Split, FESB.
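The create-evaluate-select loop described above can be sketched generically. This is a hedged illustration, not the authors' implementation: `generate` stands in for sampling from the generative model (conditioned on the current elite) and `score` for the affinity predictor:

```python
import random

def evolve(generate, score, population_size=100, keep=10, generations=5, seed=0):
    # Generic create-evaluate-select loop: each generation, sample a
    # population conditioned on the current elite, rank by the surrogate
    # score, and keep the top candidates to guide the next round.
    random.seed(seed)
    elite = []
    for _ in range(generations):
        population = [generate(elite) for _ in range(population_size)]
        population.sort(key=score, reverse=True)
        elite = population[:keep]
    return elite
```

In the paper's setting, the surrogate `score` replaces expensive docking runs, which is where the reported speed-up comes from.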

5.
8th IEEE International Smart Cities Conference, ISC2 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2136376

ABSTRACT

Two years have passed since COVID-19 broke out in Indonesia. In Indonesia, the central and regional governments have used vast amounts of data on COVID-19 patients for policymaking. However, it is clear that privacy problems can arise when such personal data are used. It is therefore crucial to keep COVID-19 data private, for example by using synthetic data publishing (SDP). One well-known SDP approach uses deep generative models. This study explores the use of deep generative models to synthesise individual COVID-19 data. The deep generative models used in this paper are Generative Adversarial Networks (GAN), Adversarial Autoencoders (AAE), and Adversarial Variational Bayes (AVB). This study found that AAE and AVB outperform GAN in loss, distribution, and privacy preservation, mainly when using the Wasserstein approach. Furthermore, the synthetic data produced predictions on the real dataset with sensitivity and F1 scores above 0.8. Unfortunately, the synthetic data still have drawbacks and biases, especially in statistical modelling. It is therefore essential to improve the deep generative models, especially in maintaining the statistical guarantees of the dataset. © 2022 IEEE.
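The sensitivity and F1 figures quoted above are standard confusion-matrix metrics; a minimal sketch of their definitions (generic, not tied to this study's models):

```python
def sensitivity(tp, fn):
    # Sensitivity (recall): fraction of true positives recovered.
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # F1: harmonic mean of precision and recall.
    precision = tp / (tp + fp)
    recall = sensitivity(tp, fn)
    return 2 * precision * recall / (precision + recall)
```

For example, 8 true positives with 1 false positive and 1 false negative gives precision = recall = 8/9 ≈ 0.89, comfortably above the 0.8 threshold the abstract reports.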

6.
Sensors (Basel) ; 22(20)2022 Oct 17.
Article in English | MEDLINE | ID: covidwho-2071712

ABSTRACT

Research on face recognition with masked faces has become increasingly important due to the prolonged COVID-19 pandemic. To make face recognition practical and robust, a large amount of face image data should be acquired for training purposes. However, it is difficult to obtain masked face images for each human subject. To cope with this difficulty, this paper proposes a simple yet practical method to synthesize a realistic masked face for an unseen face image. For this, a cascade of two convolutional auto-encoders (CAEs) has been designed. The first CAE generates a pose-matched face wearing a mask pattern, expected to fit the input face in terms of pose view. Its output is then fed into the second CAE, which extracts a segmentation map localizing the mask region on the face. Using the segmentation map, the mask pattern can be successfully fused with the input face by means of simple image processing techniques. The proposed method relies on face appearance reconstruction without any facial landmark detection or localization techniques. Extensive experiments with the GTAV Face database and Labeled Faces in the Wild (LFW) database show that the two complementary generators could rapidly and accurately produce synthetic faces even for challenging input faces (e.g., low-resolution face of 25 × 25 pixels with out-of-plane rotations).


Subject(s)
COVID-19 , Facial Recognition , Humans , Pandemics , Image Processing, Computer-Assisted/methods , Databases, Factual
7.
25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022 ; 13438 LNCS:3-12, 2022.
Article in English | Scopus | ID: covidwho-2059730

ABSTRACT

The paucity of image data and corresponding expert annotations limits the training capacity of AI diagnostic models and can inhibit their performance. To address this problem of data and label scarcity, generative models have been developed to augment training datasets. Previously proposed generative models usually require manually adjusted annotations (e.g., segmentation masks) or pre-labeling. However, studies have found that these pre-labeling-based methods can induce hallucinated artifacts, which might mislead downstream clinical tasks, while manual adjustment can be onerous and subjective. To avoid manual adjustment and pre-labeling, we propose a novel controllable and simultaneous synthesizer (dubbed CS2) […]

8.
13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, BCB 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2029551

ABSTRACT

Most evolutionary-oriented deep generative models do not explicitly consider the underlying evolutionary dynamics of biological sequences as it is performed within the Bayesian phylogenetic inference framework. In this study, we propose a method for a deep variational Bayesian generative model (EvoVGM) that jointly approximates the true posterior of local evolutionary parameters and generates sequence alignments. Moreover, it is instantiated and tuned for continuous-time Markov chain substitution models such as JC69, K80 and GTR. We train the model via a low-variance stochastic estimator and a gradient ascent algorithm. Here, we analyze the consistency and effectiveness of EvoVGM on synthetic sequence alignments simulated with several evolutionary scenarios and different sizes. Finally, we highlight the robustness of a fine-tuned EvoVGM model using a sequence alignment of gene S of coronaviruses. © 2022 Owner/Author.
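For concreteness, the JC69 substitution model mentioned above has a well-known closed form for its continuous-time Markov chain transition probabilities; a small sketch of the standard textbook formulas (not the authors' code):

```python
import math

def jc69_probs(alpha, t):
    # JC69 assumes every specific nucleotide substitution occurs at the
    # same instantaneous rate alpha. Diagonalizing the rate matrix gives
    # the closed-form transition probabilities after time t:
    decay = math.exp(-4.0 * alpha * t)
    p_same = 0.25 + 0.75 * decay   # probability of observing no change
    p_diff = 0.25 - 0.25 * decay   # probability of each specific change
    return p_same, p_diff
```

At t = 0 the chain has not moved (p_same = 1), and as t grows both probabilities converge to the uniform stationary distribution of 1/4; each row of the transition matrix sums to p_same + 3·p_diff = 1.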

9.
Revue d'Intelligence Artificielle ; 36(3):381-386, 2022.
Article in English | Scopus | ID: covidwho-1994682

ABSTRACT

The widespread COVID-19 pandemic has pushed the entire world to rely on digital social media interaction. Social media is thus a platform for humans to express many kinds of direct and indirect sentiments. Psychologically, a person tends to share his or her feelings more openly, in terms of sentiments, over social media. These sentiments, when intense, may polarize a person toward committing severe misdeeds. Here arises the role of researchers: to perform real-time identification and classification of sentiments so that a prospective mishap can be averted. In this work, an integrated framework is proposed that performs early recognition of sentiments over social media in the digital domain. Alongside sentiment categorization, another module has been integrated into the framework to perform post-predictive analysis. The proposed integrated framework combines two distinct mechanisms. First, the framework routes the input data according to its modality: text, image, or voice. Text input is fed directly to our proposed 'Lexicon based LSTM with sentiment word mapping' mechanism. From image input, both text and semantics are extracted through two different blocks. One block converts image to text and redirects the output to the model above; the second block uses our newly proposed generative model (GM) to extract the semantics of the image and redirects the outcome straight to the final output buffer of the framework. A voice-to-text module transforms voice input into text, which is redirected to our proposed lexicon-based LSTM for further processing. The proposed work has been compared with state-of-the-art techniques. Our results indicate that the overall accuracy of this framework is superior to existing methods. © 2022 Lavoisier. All rights reserved.
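As a hedged illustration of the lexicon-based scoring idea underlying the proposed 'Lexicon based LSTM' (this toy sketch omits the LSTM entirely, and the lexicon and function names are our own, not the authors' method):

```python
def lexicon_score(tokens, lexicon):
    # Sum the polarity values of the tokens found in the sentiment lexicon;
    # unknown tokens contribute 0.
    return sum(lexicon.get(tok.lower(), 0) for tok in tokens)

def classify(tokens, lexicon):
    # Map the aggregate polarity score to a coarse sentiment label.
    score = lexicon_score(tokens, lexicon)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

A lexicon-LSTM hybrid like the one described would feed such word-level polarity features into the recurrent model rather than summing them directly.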

10.
2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events, PerCom Workshops 2022 ; : 62-65, 2022.
Article in English | Scopus | ID: covidwho-1874335

ABSTRACT

Although both face recognition and object inpainting have become promising approaches through the use of deep learning, the COVID-19 pandemic has created a tremendous challenge to their further development. Masks, which people have become accustomed to as an effective sanitary measure to prevent COVID-19 infection, have also become an undeniable physical barrier between devices applying face recognition authentication and the faces to be recognized. Therefore, methods that can overcome this dilemma are urgently needed. This study proposes a method that applies a generative model to recognize masked faces based on face inpainting. We introduce a new identity loss term to preserve identity information. The reconstructed face is fed into a face recognition network to extract the feature embeddings for a distance comparison. Taking as the baseline a naive generative model without the identity loss term, the model with the identity loss term improves recognition accuracy by more than 4%. © 2022 IEEE.
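A common way to formulate an identity loss like the one described above is one minus the cosine similarity between face-recognition embeddings of the reconstructed and original faces; a minimal sketch (the exact form and weighting are our assumptions, not the paper's definition):

```python
import numpy as np

def identity_loss(emb_restored, emb_original):
    # 1 - cosine similarity between face-recognition embeddings:
    # 0 when the embeddings point the same way, up to 2 when opposed.
    cos = np.dot(emb_restored, emb_original) / (
        np.linalg.norm(emb_restored) * np.linalg.norm(emb_original))
    return 1.0 - cos

def total_loss(recon_loss, emb_restored, emb_original, weight=0.5):
    # Combine a pixel-level reconstruction loss with the identity term;
    # `weight` is a hypothetical balancing hyperparameter.
    return recon_loss + weight * identity_loss(emb_restored, emb_original)
```

The identity term is what ties the inpainted pixels back to the subject's identity rather than just to plausible face texture.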

11.
13th International Conference on Intelligent Human Computer Interaction, IHCI 2021 ; 13184 LNCS:106-116, 2022.
Article in English | Scopus | ID: covidwho-1782733

ABSTRACT

Interest in the proper treatment of mental health has been growing rapidly amid steep changes in society, family structure and lifestyle. The COVID-19 pandemic has further accelerated this need worldwide, bringing about huge demand for digital therapeutics for this purpose. One of the key ingredients of this effort is appropriately designed practice content for the prevention and treatment of mental illness. In this paper, we present novel deep generative models to construct mental training content based on a mindfulness approach, with a particular focus on providing Acceptance and Commitment Therapy (ACT) through self-talk techniques. To this end, we first introduce an ACT script generator for mindfulness meditation. With over one thousand sentences collected from various sources for ACT training practices, we develop a text generation model by fine-tuning a variant of GPT-2. Next, we introduce a voice generator to implement the self-talk technique: a text-to-speech application using the ACT training scripts generated above. Computational and human evaluation results demonstrate the high quality of the generated training scripts and self-talk content. To the best of our knowledge, this is the first approach to generating meditation content using artificial intelligence techniques capable of engaging the human mind to care for the mental health of individuals. Applications include core treatment content for digital therapeutics and meditation curriculum design. © 2022, Springer Nature Switzerland AG.

12.
Diagnostics (Basel) ; 10(11)2020 Nov 03.
Article in English | MEDLINE | ID: covidwho-1256432

ABSTRACT

Computed tomography (CT) images are currently being adopted as the visual evidence for COVID-19 diagnosis in clinical practice. Automated detection of COVID-19 infection from CT images based on deep models is important for faster examination. Unfortunately, collecting large-scale training data systematically in the early stage is difficult. To address this problem, we explore the feasibility of learning deep models for lung and COVID-19 infection segmentation from a single radiological image by resorting to synthesizing diverse radiological images. Specifically, we propose a novel conditional generative model, called CoSinGAN, which can be learned from a single radiological image with a given condition, i.e., the annotation mask of the lungs and infected regions. Our CoSinGAN is able to capture the conditional distribution of the single radiological image, and further synthesize high-resolution (512 × 512) and diverse radiological images that match the input conditions precisely. We evaluate the efficacy of CoSinGAN in learning lung and infection segmentation from very few radiological images by performing 5-fold cross validation on COVID-19-CT-Seg dataset (20 CT cases) and an independent testing on the MosMed dataset (50 CT cases). Both 2D U-Net and 3D U-Net, learned from four CT slices by using our CoSinGAN, have achieved notable infection segmentation performance, surpassing the COVID-19-CT-Seg-Benchmark, i.e., the counterparts trained on an average of 704 CT slices, by a large margin. Such results strongly confirm that our method has the potential to learn COVID-19 infection segmentation from few radiological images in the early stage of COVID-19 pandemic.
